Paul Robertson: NEID Instrument Status on WIYN and the GTO Science Program Results

Great, so I'll get right into it. What is NEID? NEID is an extremely precise radial velocity spectrometer, designed and built to allow the community to conduct a variety of experiments to discover and characterize exoplanets. Basically, you asked for it, we built it. Some basic statistics: it's on the 3.5-meter WIYN telescope at Kitt Peak. It has a design-specified instrument precision better than 50 centimeters per second, and very broadband wavelength coverage, mostly centered in the optical but extending somewhat into the near infrared. We have two observing modes, two different fibers, that let you choose between the highest possible resolving power, which is about 115,000, or give a little of that back in order to increase throughput for fainter targets. I feel really fortunate to be giving this presentation on behalf of a huge team of extraordinarily talented scientists, many of whom are here in the room; I've underlined those folks, and I'm sure any of them would be happy to talk with you more about NEID. I'll start by highlighting a few really exciting recent science results. There's a large community of people using NEID and producing great science. Here is a result we're really excited about: a singly transiting object at intermediate period, about 190 days, where NEID's combination of high precision and queue-scheduled observations was essential for characterizing the planet. We were able to resolve the extremely eccentric orbit; this is the most eccentric transiting exoplanet ever discovered, with an eccentricity of around 0.95. We were able to follow it up by iterating the observing cadence in real time.
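As an aside on why real-time cadence iteration matters at e ≈ 0.95: in a Keplerian orbit that eccentric, nearly the entire velocity swing is compressed into a brief window around periastron. Here is a minimal sketch of a Keplerian RV curve; the period and eccentricity match the numbers quoted in the talk, but K, omega, and the periastron time are made up for illustration, and this is not the published fit for the system.

```python
import numpy as np

def kepler_solve(M, e, n_iter=60):
    """Solve Kepler's equation M = E - e*sin(E) for E by bisection.

    Bisection is slow but robust even at e ~ 0.95, where Newton's
    method can fail to converge from a naive starting guess.
    """
    M = np.mod(np.atleast_1d(np.asarray(M, dtype=float)), 2 * np.pi)
    lo, hi = np.zeros_like(M), np.full_like(M, 2 * np.pi)
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        too_low = mid - e * np.sin(mid) < M
        lo = np.where(too_low, mid, lo)
        hi = np.where(too_low, hi, mid)
    return 0.5 * (lo + hi)

def keplerian_rv(t, P, K, e, omega, t_peri):
    """Stellar reflex RV: v = K * [cos(nu + omega) + e*cos(omega)]."""
    M = 2 * np.pi * (t - t_peri) / P                     # mean anomaly
    E = kepler_solve(M, e)                               # eccentric anomaly
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega))

# Illustrative parameters only (P ~ 190 d, e ~ 0.95 as in the talk).
t = np.linspace(0.0, 190.0, 2000)
rv = keplerian_rv(t, P=190.0, K=100.0, e=0.95, omega=0.5, t_peri=95.0)
# Nearly all of the RV variation is compressed into a few days around
# periastron, which is why iterating the queue-scheduled cadence in
# real time is so valuable for pinning down the orbit.
```
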
But we were also able to measure the Rossiter-McLaughlin effect for this planet, and those of you who are familiar with these measurements will instantly recognize that it's backwards: this planet is retrograde. So we interpret this system as essentially a proto hot Jupiter that has experienced some recent dynamical violence and is on the high-eccentricity migration track to becoming a hot Jupiter. Another emerging science case for radial velocity exoplanet characterization is confirmation of exoplanets discovered through Gaia astrometry. Gaia has incredible astrometric precision, capable of discovering new exoplanets, which is something we've been waiting for since van de Kamp. We've certainly used astrometry to characterize exoplanets, but actually discovering them for the first time with astrometry has remained elusive, and Gaia is capable of this. But there are false positive scenarios, and this is an illustration of that: a near equal mass stellar binary can cause photocentric motion that is extremely similar to what you would expect from a star being orbited by a dark exoplanet companion. In a study that our team led, we found that of the list of good-looking targets from Gaia, something like 75% actually ended up being false positives, so radial velocities are essential to confirm these systems. And that's what we have done. This is a paper that was just accepted, I think two days ago, where we used NEID radial velocities to confirm the orbits of Gaia-3b and Gaia-4b; these are among the first astrometric planet discoveries from Gaia. I want to emphasize also that it's not just the NEID science team I showed on that slide a moment ago that's taking advantage of NEID and producing really outstanding results. This is another example of Rossiter-McLaughlin science with NEID.
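The photocenter false-positive argument above can be made quantitative. For a binary, the photocenter orbit size scales as a(B − β), where B is the secondary's mass fraction and β its flux fraction; for a near-twin binary B ≈ β, so the photocenter barely moves and can mimic the small wobble caused by a dark planetary companion. A back-of-the-envelope sketch with made-up numbers:

```python
def photocenter_semimajor_axis(a, m1, m2, f1, f2):
    """Photocenter orbit size: a_phot = a * (B - beta), where B is the
    secondary's mass fraction and beta its flux fraction."""
    B = m2 / (m1 + m2)
    beta = f2 / (f1 + f2)
    return a * (B - beta)

# Dark planetary companion: the secondary contributes no light (beta = 0),
# so the photocenter traces the primary's small barycentric wobble.
a_planet = photocenter_semimajor_axis(a=3.0, m1=1.0, m2=0.003, f1=1.0, f2=0.0)

# Near-twin stellar binary: mass ratio 0.95 and an illustrative flux
# ratio of 0.93. Mass and flux fractions nearly cancel, so the
# photocenter barely moves despite the huge companion mass.
a_binary = photocenter_semimajor_axis(a=3.0, m1=1.0, m2=0.95, f1=1.0, f2=0.93)

# a_planet ~ 0.009 au and a_binary ~ 0.016 au: the same order of
# magnitude, so astrometry alone struggles to tell the scenarios apart,
# while the radial velocities differ by orders of magnitude
# (km/s for the binary versus tens of m/s for the planet).
```
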
Because of its precision and fully queue-scheduled operations, this facility is really useful for conducting radial velocity, and specifically Rossiter-McLaughlin, surveys in bulk, so you can start to get demographic properties of exoplanet systems and start to say something about planet formation and migration. This is an example: a set of sub-Saturn exoplanets in compact multi-planet systems, systems that we believe are consistent with convergent migration. What we see here on the left is one such system, the Rossiter-McLaughlin curve from NEID, and on the right you see in orange the five compact-system sub-Saturns compared to the broader population of isolated sub-Saturns. The ones in the compact multis are all at essentially zero obliquity; they're all well aligned with the stellar spin axis, which potentially indicates that these systems all form in low-obliquity states and have to be dynamically excited to anything else. This is again a project led by scientists other than the NEID instrument team, but NEID contributed significantly: of the five systems you see there, three had their obliquities measured with NEID, and that's all of them measured since NEID came online. What I want to talk to you about primarily this afternoon, though, is the survey we've been conducting through the instrument team's guaranteed time: the NEID Earth Twin Survey, or NETS. This is a blind radial velocity survey intended to discover new low-mass exoplanets around nearby stars. This graphic shows what we expected to be sensitive to given various time allocations over a nominal five-year survey timescale, and the gold curves show what we expect to be sensitive to given our design error budget, a single-measurement precision of about 27 centimeters per second.
At the various allocations, we expected to be sensitive to one-Earth-mass planets everywhere left of the gold curve you see. So this is an ambitious project. You can see more details of the survey design and the ranked target selection list in Arvind Gupta's paper from 2021. But I want to pause here and acknowledge: this is a really compelling graphic, but it's only meaningful if we can achieve two different things. One, the instrument actually has to be stable enough to live up to that 27 centimeters per second billing we built this around. And even if we do that, we have to control the systematics from stellar variability down to the point where that single-measurement precision number is actually meaningful. So what I want to do with the rest of my time is argue that we have in fact made significant progress on both of those items. I'll start with instrument stability, which is admittedly difficult to characterize, especially on sky. But one thing we can do is look at the roll-up of instrumental and stellar stability on radial velocity standards. This is one of the RV community's favorite standards, Tau Ceti, over about a 1500-day baseline, and I haven't done anything here except a basic signal-to-noise cut and solving for the zero-point offset that happened a couple of years ago, when we had a wildfire-induced thermal break to the system. What we see over that time, again the roll-up of instrumental plus stellar variability, is 90 centimeters per second of scatter. What I think is interesting is that I gave a presentation on NEID almost exactly a year ago, with this exact slide and a year less of baseline, and we were at about a meter per second. So over a year, our scatter has actually dropped by 10%, and I think there are two primary causes. One is that we have simply maintained our instrument stability.
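For readers curious what "solving for the zero-point offset" involves: it amounts to fitting a step function alongside the velocities before computing the scatter, a one-parameter linear least-squares problem. A minimal sketch on synthetic data; this is an illustration of the idea, not NEID's actual reduction code, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic standard-star RVs: white noise plus a step offset partway
# through the baseline, mimicking an instrumental zero-point shift.
t = np.arange(300.0)
rv = rng.normal(0.0, 0.9, size=t.size)   # ~0.9 m/s intrinsic scatter
rv[t > 150] += 5.0                       # 5 m/s zero-point jump

# Fit a constant plus a step via linear least squares.
X = np.column_stack([np.ones_like(t), (t > 150).astype(float)])
coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
offset = coef[1]                         # recovered jump amplitude

rms_raw = np.std(rv)                     # inflated by the uncorrected jump
rms_corrected = np.std(rv - X @ coef)    # scatter after removing the offset
```

After the correction, the quoted scatter reflects only the instrumental-plus-stellar roll-up rather than the discrete instrumental event.
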
And we've averaged over a period of increased stellar activity, so the star has gotten a little quieter. But also, a core component of the NEID project is ongoing pipeline development; in fact, version 1.4 of the pipeline was released very recently, and the data are being re-reduced right now. We can make the hardware as stable as we want, and it's useless to all of you if we don't have a working pipeline, so that continued investment in the pipeline is paying dividends, and we see evidence of that here. At this point, we're limited in our sensitivity by the stellar variability. So what are we doing about that, and how is it going? One of the advantages we have with NEID is the solar feed. I think this was mentioned earlier today already, but we have a solar tracker on the roof of WIYN with a fiber to the instrument, and we just take solar spectra as often as we can during daytime hours. This has been invaluable, both to us internally and to the community; there have been a lot of really interesting collaborative projects with people sharing their solar streams from various instruments, and it allows us to test a variety of activity mitigation techniques. I want to highlight a recent result posted to arXiv by Eric Ford, comparing a lot of different mitigation techniques and finding in particular that the SCALPELS method, a technique that actually tries to resolve the line-shape deformation caused by stellar variability, performs well. You see here the comparison: when we clean the spectra with the SCALPELS algorithm, we separate the distortions to the spectrum caused by pure Doppler shifts from those caused by line-shape changes. And when we do that, the scatter of the solar RV stream comes down to something very similar to that 27 centimeters per second number I quoted earlier, in the top right corner there.
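SCALPELS itself operates on the autocorrelation of the cross-correlation function, which is invariant under pure Doppler shifts; the cartoon version of the idea is to project the measured RVs onto the leading components of shift-invariant shape indicators and subtract that projection, leaving the Doppler signal behind. The following is a heavily simplified sketch on synthetic data, with made-up signals and indicators standing in for the real shape components; see the SCALPELS papers for the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 400.0, n)

# Synthetic "measured" RVs: a small planet signal plus activity-driven
# line-shape contamination plus photon noise (all numbers illustrative).
planet = 0.3 * np.sin(2 * np.pi * t / 37.0)    # 30 cm/s planet
activity = 2.0 * np.sin(2 * np.pi * t / 25.0)  # 2 m/s activity signal
rv = planet + activity + rng.normal(0.0, 0.2, n)

# Shape indicators that trace activity but not true Doppler shifts
# (stand-ins for the shift-invariant ACF components SCALPELS uses).
shape = np.column_stack([
    activity + rng.normal(0.0, 0.1, n),     # e.g. a line-asymmetry index
    activity**2 + rng.normal(0.0, 0.1, n),  # a second, nonlinear tracer
])

# Project the RVs onto the principal components of the shape indicators
# and subtract that projection; the residual keeps the Doppler signal.
# Caveat: any planet power that happens to lie in the shape subspace
# is removed along with the activity.
U, _, _ = np.linalg.svd(shape - shape.mean(axis=0), full_matrices=False)
rv_centered = rv - rv.mean()
rv_shape = U @ (U.T @ rv_centered)   # shape-driven component
rv_clean = rv_centered - rv_shape    # cleaned velocities
```

In this toy setup the cleaned scatter drops from the meter-per-second level to roughly the injected photon-noise-plus-planet level, which is the qualitative behavior seen in the solar-stream comparison.
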
In the inset, I'm showing that we can achieve this with something like an hour's worth of integration, so we don't need the signal-to-noise of a full day's worth of solar observations, which is potentially really promising for our on-sky targets. This is the first indication that we really can tackle the stellar variability problem to the level we need to discover the planets that NETS set out to find. And we're actually starting to do that: we recently published the first NETS exoplanet, HD 86728 b. It's an object that was identified as a candidate by previous surveys, but there wasn't enough statistical significance to fully claim a discovery. On the left I show those archival data, and on the right the NEID RVs; I'll point out that the archival velocities on the left are phase-binned, while those on the right are individual measurements. So you can see both the cadence and the precision we're able to achieve in making a high-confidence detection of that planet. This is kind of an interesting planet. It's in that super-Earth/sub-Neptune range, but given its period and mass compared to other planets in that bin, it's unusual in the sense that it doesn't have any known planetary companions, which most objects in that range do. Somewhat mysterious and interesting, and I can understand your skepticism, because I had the same thought: well, we know planets are everywhere, you just haven't found them yet. Maybe that's true, but if it is true, we can actually place pretty stringent limits on this. This is the power, again, of the cadence and precision of the NETS survey. This plot shows that planet as a star, and for this system we've placed sensitivity limits at 10 sigma, 5 sigma, and 3 sigma.
You can see the one-Earth-mass sensitivity contour there: we're getting down to sensitivity to terrestrial-mass planets across a fairly broad range of orbital periods, and this is with only about three years of our nominal five-year survey baseline. So with continued investment of time, we can push these limits both down and to the right, toward smaller and longer-period planets. I'll conclude by coming back to the stellar variability problem and acknowledging that techniques like SCALPELS work best on bright stars, where you can build up a lot of signal-to-noise and resolve those line-shape deformations. But NEID produces a lot of other activity tracers: the canonical spectral line indicators like calcium H&K, which I've shown here for a NETS target, line-shape parameters, all kinds of things that can be useful across a broad range of stars at various brightnesses. This is one where we've resolved this gorgeous eight-year magnetic cycle for the star. You might look at the NEID calcium values and say, well, there's a lot of scatter, that doesn't look very good; but if we zoom in, we see that we're actually resolving the stellar rotation. That's the fidelity of the calcium line measurements, and again the observing cadence. What's shown here is a two-component Gaussian process regression that resolves both the magnetic cycle and the stellar rotation period. And what's interesting about this system is that when we do this model for calcium H&K, the model looks very similar to the shape component of the SCALPELS algorithm on the same target. So this is an indication that even for targets where SCALPELS is impractical because of the stellar brightness, some of the other data products we produce will be similarly useful for disentangling planets from stellar activity. And in fact, we have identified here the next NETS exoplanet, which we're getting ready to publish.
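A two-component Gaussian process of the kind described above is commonly written as a sum of kernels: a smooth long-timescale term for the magnetic cycle plus a quasi-periodic term for rotational modulation. The following is a minimal sketch of that covariance structure with hypothetical hyperparameters; it is not the parameterization or the fitted values from the actual analysis.

```python
import numpy as np

def sq_exp(dt, amp, ell):
    """Squared-exponential term: a smooth stand-in for the slow
    magnetic-cycle variation."""
    return amp**2 * np.exp(-0.5 * (dt / ell) ** 2)

def quasi_periodic(dt, amp, period, ell, gamma):
    """Quasi-periodic term: rotational modulation from active regions
    that evolve on the timescale ell."""
    return amp**2 * np.exp(-0.5 * (dt / ell) ** 2
                           - gamma * np.sin(np.pi * dt / period) ** 2)

def two_component_cov(t, jitter=0.05):
    """Covariance for the two-component model: cycle + rotation + white noise."""
    dt = t[:, None] - t[None, :]
    K = (sq_exp(dt, amp=1.0, ell=1500.0)
         + quasi_periodic(dt, amp=0.3, period=25.0, ell=100.0, gamma=2.0))
    return K + jitter**2 * np.eye(t.size)

# One prior draw: a slow cycle-like drift with rotational wiggles on top,
# qualitatively like the S-index behavior described in the talk.
t = np.linspace(0.0, 2900.0, 400)  # ~8 years of regular sampling
rng = np.random.default_rng(1)
L = np.linalg.cholesky(two_component_cov(t))
sample = L @ rng.standard_normal(t.size)
```

Conditioning this covariance on the calcium H&K time series gives the smooth cycle-plus-rotation model shown on the slide; the same hyperparameters can then inform the activity model for the RVs.
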
This is a detection below the 1 meter per second amplitude threshold, and so we're really excited to share more of that with you soon. I'm running out of time, so I'll leave up some conclusions and mainly just emphasize: we're showing clearly from the solar RVs that we have broken through the 50 centimeter per second barrier, and we would really like to probe deeper to see how much closer we can get to the 10 centimeter per second threshold needed to find Earth twins. This can be done, but it needs a significant investment of observing time, and specifically, here's what we need it for. We need to beat down photon noise: if we're going to measure things at 10 centimeters per second, we've got to expose to a photon noise level to match, and that's not cheap. We also simply need the raw number of observations required to detect low-amplitude signals. And finally, we need all of this over a long baseline, so we're sensitive to longer-period orbits. With that, I'll thank you and take any questions.

Moderator: Do we have any questions?

Question: One quick question, if you don't mind. With the remaining time, are you planning to keep the observing cadence the same, or are you going to adjust it based on the existing observations you have in the survey?

Robertson: No. We tried our best to set the observing cadence from the outset so that, at the end, it's as simple as possible for the statisticians to back out things like occurrence rates. So we really don't want to change it, and the plan is to continue at the current cadence.

Question: Great, that was the answer I was hoping for.

Robertson: Glad I could provide it.

Moderator: I think we're at the end of our time, so let's thank Paul again.